There’s Little Evidence for Today’s AI Alarmism

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

What are we to make of the fact that Geoffrey Hinton, Sam Altman, and many other leading AI experts have lent their names to that one-sentence statement issued by the Center for AI Safety?[1] There’s no mention of AI’s benefits, just the word “extinction” and two terrifying analogies certain to excite the media, trouble the public, and cast doubt over America’s technological future. If there was ever a time to defend digital, this is it.

The sentence is worth deconstructing. The priority is “mitigating the risk of extinction.” “Mitigate” typically implies that a problem can’t be eliminated, only reduced, as in mitigating pain. It’s true that the risks of pandemics and nuclear war can’t currently be eliminated, so we need to mitigate those threats as best we can. But it’s odd that so many scientists believe the threat of AI-driven human extinction is similarly pressing and permanent. The horrors of viruses and atomic bombs are all too real, but many AI risks remain vague and speculative, and others seem quite manageable and much less deadly. Today’s alarmists have yet to provide compelling evidence to justify their cataclysmic fears.

Two Misleading Analogies

The pandemic analogy is flawed because AI is not a virus like Mydoom or WannaCry that must be contained or eradicated; it’s an extremely valuable tool that we want to use. Nor was Covid-19 successfully “mitigated”: supplies to treat it were not available, national decisions were chaotic, and many millions died or suffered. Neither the World Health Organization nor the Centers for Disease Control and Prevention acquitted themselves well. The former gave in to political pressures, while the latter sent mixed messages and lost the trust of many Americans. Do we really want similarly powerful organizations—or Congress—to oversee the diverse and fast-moving AI field?

The nuclear war analogy is flawed because AI isn’t primarily a military technology. Should we really try to stop AI proliferation in the same way we have tried (and sometimes failed) to prevent nuclear proliferation? Of course not. Additionally, the risk of nuclear war has long been mitigated by the doctrinal reality of Mutually Assured Destruction (MAD). Advanced AI-enabled missiles, drones, and lasers will surely become part of future MAD calculations, but this means that the military impact of AI is mostly an extension of today’s status quo. The big AI-driven changes will be in business and society where competition and consumer demand have always led to innovation, not the wary standoffs of MAD.

And One Missing One

It’s curious that the statement didn’t mention climate change as a “societal-scale” risk, even though it is a much more accurate analogy than either viruses or bombs. Like fossil fuels, AI is widely used, has a great many benefits, and comes with both real and potential downsides.[2] Whereas pandemics start suddenly and spread quickly, and nuclear bombs can be detonated at any time, the risks of AI—like those of climate change—typically build up over time. By raising the specters of Covid-19 and nuclear war, the statement creates a sense of urgency. In contrast, the climate change challenge has been characterized by decades of warnings, scientific panels, and political summits that have resulted in gradual business and societal change. The AI world has long issued similar, if lower-profile, warnings, and it’s developing similar groups and forums, most likely with similarly gradual results. This seems the best and most likely path forward.

All Too Familiar Fears

That great AI progress has generated great AI fears is hardly surprising. From Dr. Frankenstein’s monster to HAL in the movie 2001: A Space Odyssey, there have always been warnings that human-like inventions would eventually spin out of control. Today’s concerns can be grouped into at least a dozen broad categories:

1. AI-based automation will eliminate millions of white- and blue-collar jobs.

2. AI systems are inherently biased and discriminatory.

3. AI systems and algorithms are unaccountable and unexplainable.

4. AI will destroy privacy and lead to a surveillance state.

5. AI will lead to further increases in societal inequality.

6. AI-based deepfakes will undermine trust and disrupt politics and society.

7. Autonomous AI systems and weaponry will destabilize international relations.

8. Hostile powers will seek to dominate the world through AI.

9. AI lacks human values and ethics.

10. General artificial intelligence will soon surpass human intelligence.

11. AI will diminish human worth.

12. AI systems will go rogue, take control of society, and make humans expendable.

Specific Solutions

Those fears all raise important issues. But taking them one at a time can also help us see how AI becomes less scary once specific threats are addressed separately, especially when their potential impact lies far in the future. Consider how each of the 12 scenarios above might prove largely manageable over time:

1. The developed world is in a prolonged productivity slump and its populations are rapidly aging, so increased automation should be welcomed. More broadly, fears that automation will result in “the end of work” have always proven wrong, and always will as long as human wants are not fully met.[3] Surely, we can wait and see how jobs actually change and whether large-scale labor market disruption materializes.

2. In theory, AI biases can be corrected by improving the underlying data sets. As ITIF has long argued, machines will, over time, prove much less biased than people.[4]

3. Whether the output of an AI system is fully explainable or not, organizations that develop and deploy AI will surely be held accountable, just as they are for their use of software, machines, robots, and other tools, materials, and processes. Of course, as with the Dark Web, terrorists, and the criminal underworld, anonymous entities will use new technologies to commit crimes and/or cause mayhem. It has ever been thus.

4. It’s America’s—and other nations’—choice whether or not to become more like China’s surveillance state. But as China has also shown, widespread societal surveillance can occur with or without advanced AI. Loss of privacy is not an inevitable result of AI; it’s a real concern, but one that can and should be guarded against through legislation.

5. Only time will tell whether AI becomes a major driver of income inequality, but fears that AI will be controlled by a few big companies seem unfounded, as usage is already widespread around the world and open-source and low-cost options proliferate. Tech market power has always proved more ephemeral than it seems.

6. Deepfakes are a serious problem, but solutions are under development, especially the use of AI to detect AI-based counterfeits. People will surely become more skeptical, “zero trust” viewers, just as they have because of Photoshop. The proliferation of deepfakes might even lead people to rely more on known and trusted sources. AI-based plagiarism in schools can be mitigated simply by testing students in an offline setting.

7. Just as the major nuclear powers have—or should have—hot lines to manage crisis situations, they will hopefully develop similar ways to control autonomous weaponry. It’s in everyone’s interest to avoid accidental and/or escalating conflicts. Similarly, multilateral conventions on landmines, biological weapons, and chemical weapons have established global norms that reduce risks, especially to civilians.

8. Given the way research is shared globally, it will probably be impossible for any one country to maintain a decisive AI edge in terms of data and algorithms, unless some countries intentionally opt out of advanced AI competition. China surely won’t.

9. Values and ethics can be built into many AI systems and applications, although whose values and whose ethics will always be an issue, especially in today’s highly polarized times. As with social media, agreeing on what is true, fair, and/or responsible will never be easy or straightforward.

10. Generalized machine intelligence that surpasses humans won’t happen for a very long time, if ever. There will be plenty of time to gauge the best path forward.

11. Even though humans are no match for computers, people still want to beat one another at chess, Go, and other games. Human strength and speed are still greatly admired even though machines are vastly stronger and faster.

12. The idea that AI systems will turn against humans remains the stuff of science fiction. While it’s made for some great movies, there’s little evidence to support the idea that humans would not remain in ultimate control.

Given all this, why are so many experts so alarmed, even to the point of feeling guilty about their lifelong work? My initial reaction was that they must be seeing something that I don’t. But many signatories to the Center for AI Safety’s one-sentence statement have taken to the airwaves to explain their concerns, and I haven’t heard anything beyond the familiar fears listed above. It was surprising how many cited the student essay-writing issue, as if plagiarism were a new problem with no effective remedies. Deepfakes were also mentioned often, but I have not heard anyone discuss how that problem could be mitigated by the marketplace. In the end, the alarmism wasn’t convincing.

Covid-19 demonstrated the risks of making major policy decisions rapidly and under great pressure, which is why governments should be patient in responding to AI. The starting point should always be how well existing laws in health care, financial services, media, political campaigning, privacy, and other areas apply to AI. Additional AI guardrails such as auditing algorithms, ethics training, review boards, privacy and copyright policies, and the like can all play a role. But over time, we’ll have a much better sense of which AI areas require significant regulatory intervention and which do not. Focusing on a small number of actual problems is much easier than trying to anticipate what will happen across a wide range of complex AI domains. Right now, we would just be guessing.

About This Series

ITIF’s “Defending Digital” series examines popular criticisms, complaints, and policy indictments against the tech industry to assess their validity, correct factual errors, and debunk outright myths. Our goal in this series is not to defend tech reflexively or categorically, but to scrutinize widely echoed claims that are driving the most consequential debates in tech policy. Before enacting new laws and regulations, it’s important to ask: Do these claims hold water?

About the Author

David Moschella is a non-resident senior fellow at ITIF. Previously, he was head of research at the Leading Edge Forum, where he explored the global impact of digital technologies, with a particular focus on disruptive business models, industry restructuring and machine intelligence. Before that, David was the worldwide research director for IDC, the largest market analysis firm in the information technology industry. His books include Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s (DXC, 2018), Customer-Driven IT (Harvard Business School Press, 2003), and Waves of Power (Amacom, 1997).

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent, nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized by its peers in the think tank community as the global center of excellence for science and technology policy, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit us at www.itif.org.

Endnotes

[1]. Center for AI Safety, “Statement on AI Risk,” accessed June 7, 2023, https://www.safe.ai/statement-on-ai-risk.

[2]. David Moschella, “Data Isn’t the New Oil; That Might Be a Good Thing,” ITIF Defending Digital Series, no. 18, May 30, 2023, https://itif.org/publications/2023/05/30/data-isnt-the-new-oil-that-might-be-a-good-thing/.

[3]. Robert D. Atkinson, “Robots, Automation, and Jobs: A Primer for Policymakers” (ITIF, May 2017), https://itif.org/publications/2017/05/08/robots-automation-and-jobs-primer-policymakers/.

[4]. David Moschella, “AI Bias Is Correctable. Human Bias? Not So Much,” ITIF Defending Digital Series, no. 5, April 25, 2022, https://itif.org/publications/2022/04/25/ai-bias-correctable-human-bias-not-so-much/.
